The boxes aren’t filled by Omega. Omega comes and tells you: “If you one-box, you’ll get a million. If you two-box, you’ll only get a thousand.”
One of the lessons of the discussion of the smoking lesion problem in the FDT paper, and of the murder lesion problem in Cheating Death in Damascus, is that FDT (like CDT) reasons with something like a causal graph of the decision problem it’s in. If multiple graphs are consistent with the problem description, then FDT’s behavior may be underdetermined by that description. In this case, I can think of at least three causal stories you might tell for how Omega’s claim ends up being true (sketched as rough graphs after the list), each of which also resolves the ambiguities in what Omega is saying somewhat differently:
1. Although Omega itself didn’t fill the boxes, Omega observed some other process (we might call it “Omega+1”) that in some fashion computes the agent’s decision function in advance, and fills the boxes based on this function’s output.
2. The boxes are connected to an explosive device that’s set up to detect if the agent touches one of the boxes, and to destroy the contents of the other box as soon as the first touch is detected.
3. Before deciding whether to approach the agent, Omega first observed whether the opaque box had $1,000,000 in it or not. If the opaque box happened to be empty, then Omega approached the agent and told them, “If you one-box, you’ll get a million. If you two-box, you’ll only get a thousand,” knowing that the agent would recognize that it’s purely a coincidence which box is full and that Omega must therefore just be joking around and trying to find roundabout ways to communicate “I checked, and the opaque box is empty.” The two sentences are vacuously true in the manner of material implications, because the antecedent of the first is known to be false and the consequent of the second is known to be true. Having received the information that the opaque box is already empty and would have been empty regardless of what decision they made, the agent can confidently take the $1,000, per the second sentence.
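To make the graph talk concrete, here is a rough sketch (my own illustration, not from the original problem) of the three stories as parent → child edge lists. The node names are informal labels I am introducing for this sketch, not anything from the post:

```python
# Rough causal structure of each story, written as parent -> children edges.
# Node names are informal labels chosen for this illustration.
GRAPHS = {
    # 1. A predictor ("Omega+1") reads off the agent's decision function.
    "predictor": {
        "agent's decision function": ["Omega+1's prediction", "choice"],
        "Omega+1's prediction": ["box contents"],
        "choice": ["payout"],
        "box contents": ["payout"],
    },
    # 2. The agent's physical choice triggers the explosive device.
    "explosive": {
        "choice": ["device trigger", "payout"],
        "device trigger": ["box contents"],
        "box contents": ["payout"],
    },
    # 3. Omega only speaks because the opaque box is already empty.
    "empty-box joke": {
        "box contents (already empty)": ["Omega's remark", "payout"],
        "choice": ["payout"],
    },
}

for name, edges in GRAPHS.items():
    print(name)
    for parent, children in edges.items():
        for child in children:
            print(f"  {parent} -> {child}")
```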
If the agent knows they’re in the first scenario, then they’ll reason the same as in Newcomb’s problem, of which this is a trivial variant: FDT and EDT one-box, and CDT two-boxes. If the agent knows they’re in the second scenario, then all three agents one-box. If the agent knows they’re in the third scenario, then all three agents two-box (i.e., take the $1,000).
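To make those per-scenario recommendations concrete, here is a minimal payoff sketch (again my own, with the usual Newcomb amounts of $1,000,000 and $1,000 assumed). The first entry records the payoffs as FDT and EDT evaluate the counterfactual; CDT disputes that evaluation, which is exactly why it two-boxes there. The second entry assumes the device is wired so that a two-boxer ends up with only the transparent box’s $1,000, as Omega’s statement requires.

```python
# Illustrative payoffs under each causal story (assumed amounts: $1,000,000
# in the opaque box when full, $1,000 in the transparent box).
PAYOFFS = {
    # Story 1: a predictor filled the opaque box iff it foresaw one-boxing;
    # these are the payoffs as FDT/EDT evaluate the counterfactual.
    "predictor": {"one-box": 1_000_000, "two-box": 1_000},
    # Story 2: grabbing both boxes destroys the opaque box's contents, so
    # even the purely causal payoffs look the same as in story 1.
    "explosive": {"one-box": 1_000_000, "two-box": 1_000},
    # Story 3: Omega only spoke because the opaque box is already empty,
    # so that box contributes nothing either way.
    "empty-box joke": {"one-box": 0, "two-box": 1_000},
}

for scenario, payoff in PAYOFFS.items():
    best = max(payoff, key=payoff.get)
    print(f"{scenario}: one-box ${payoff['one-box']:,}, "
          f"two-box ${payoff['two-box']:,} -> {best}")
```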
If the agent is uncertain which of those scenarios they’re in, then their decision will depend on the probabilities they assign to the scenarios and on the expected payoffs that result.
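As a toy illustration of how that could cash out, here is a small expected-value calculation with made-up credences, reusing the illustrative payoffs above and evaluating the first story the way FDT and EDT do (a CDT agent would compute that term differently):

```python
# Toy expected values under made-up credences over the three causal stories,
# using the same illustrative payoffs as the sketch above.
credence = {"predictor": 0.3, "explosive": 0.3, "empty-box joke": 0.4}
payoff = {
    "one-box": {"predictor": 1_000_000, "explosive": 1_000_000, "empty-box joke": 0},
    "two-box": {"predictor": 1_000, "explosive": 1_000, "empty-box joke": 1_000},
}

for action, by_story in payoff.items():
    ev = sum(credence[s] * v for s, v in by_story.items())
    print(f"{action}: ${ev:,.0f}")

# Two-boxing pays $1,000 in every story, so on these terms one-boxing wins
# whenever the combined credence in the first two stories exceeds about 1/1000.
```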